Abstract:
Given most continuous h: [a, b] → R and ε > 0, we show how to obtain a neural net that approximates h to within ε, uniformly over [a, b]. To construct this neural net, we first train a fuzzy neural net on a finite training set; the needed neural net is then the defuzzified trained fuzzy neural net.
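As an illustrative aside (not the paper's construction): the claim is uniform approximation, i.e. sup over x in [a, b] of |net(x) − h(x)| ≤ ε. The minimal sketch below trains an ordinary one-hidden-layer net on a finite grid and estimates the sup-norm error on a dense grid; the fuzzy training and defuzzification steps are not reproduced, and the choice h = sin, the grid sizes, and the learning rate are all assumptions.

```python
# Illustrative sketch only: a plain MLP stands in for the final (defuzzified)
# crisp net, to show the "within epsilon, uniformly on [a, b]" claim.
import numpy as np

def train_crisp_net(h, a, b, n_train=50, n_hidden=40, epochs=5000, lr=0.05):
    # one-hidden-layer tanh net, fit by full-batch gradient descent
    # on a finite training grid of n_train points
    rng = np.random.default_rng(0)
    x = np.linspace(a, b, n_train).reshape(-1, 1)
    y = h(x)
    W1 = rng.normal(0, 1, (1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        z = np.tanh(x @ W1 + b1)
        err = (z @ W2 + b2) - y                 # gradient of squared loss
        gW2 = z.T @ err / len(x); gb2 = err.mean(0)
        dz = (err @ W2.T) * (1 - z**2)
        gW1 = x.T @ dz / len(x); gb1 = dz.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return lambda t: np.tanh(t @ W1 + b1) @ W2 + b2

h = np.sin
net = train_crisp_net(h, 0.0, np.pi)
xs = np.linspace(0.0, np.pi, 2000).reshape(-1, 1)   # dense evaluation grid
sup_err = np.abs(net(xs) - h(xs)).max()             # proxy for the sup-norm
print(f"sup |net - h| ~= {sup_err:.4f}")
```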
Abstract:
A hardware implementation of a parallelized fuzzy Adaptive Resonance Theory neural network is designed and simulated. Parallel category choice and resonance are implemented in the network. Continuous-time and discrete-time winner-take-all neural circuits that identify the largest of M inputs are used as the winner-take-all units. The continuous-time circuit is described by a state equation with a discontinuous right-hand side; its discrete-time counterpart is governed by a difference equation. The corresponding functional block diagrams of the circuits comprise M feed-forward hard-limiting neurons and one feedback neuron, which computes the dynamic shift of the inputs. The circuits combine arbitrary finite resolution of the inputs, fast convergence to the winner-take-all operation, low computational and hardware implementation complexity, and independence from initial conditions. The circuits are also used to find the elements of the input vector with minimal/maximal values, in order to normalize the inputs to the range [0, 1].
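A minimal sketch of the discrete-time behaviour described above, under stated assumptions: M hard-limiting neurons compare the inputs against a dynamic shift maintained by a single feedback neuron, and the same min/max machinery yields the values needed for [0, 1] normalization. The bisection-style shift update here is an illustrative stand-in, not the paper's difference equation.

```python
import numpy as np

def hard_limit(u):
    return (u > 0).astype(float)                # hard-limiting (step) neurons

def wta(inputs, max_iter=100):
    a = np.asarray(inputs, dtype=float)
    lo, hi = a.min(), a.max()                   # bounds for the dynamic shift
    for _ in range(max_iter):
        shift = 0.5 * (lo + hi)                 # feedback neuron's output
        y = hard_limit(a - shift)               # M feed-forward neurons
        n_active = int(y.sum())
        if n_active == 1:                       # unique winner found
            return y
        if n_active > 1:
            lo = shift                          # too many above the shift: raise it
        else:
            hi = shift                          # none above: lower it
    raise RuntimeError("no unique maximum (tied inputs)")

def normalize01(v):
    # the same min/max search enables normalizing inputs to [0, 1]
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

print(wta([0.2, 0.9, 0.4]))          # -> [0. 1. 0.]
print(normalize01([3.0, 7.0, 5.0]))  # -> [0.  1.  0.5]
```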
Abstract:
To better predict future regional temperature trends, a recurrent neural network algorithm, LSTM (long short-term memory), is applied to regional temperature forecasting. This paper uses temperature records from Alberta, Quebec, and Saskatchewan, Canada. Based on the time-series characteristics of each province's average temperature, an LSTM model of the temperature trend over time is built and used to predict future temperature changes in these provinces. The results show that when the model forecasts the temperature trend for the next three years, the predicted trend is largely consistent with the existing data and the prediction accuracy is relatively high. The LSTM-based approach of this paper can therefore be applied to the prediction of regional temperature trends; its results and accuracy are good, which gives it practical value.
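A minimal sketch of the kind of LSTM pipeline the abstract describes, assuming a 12-step sliding window, a single-layer PyTorch LSTM, and a synthetic seasonal series standing in for the provincial temperature data (none of these specifics come from the paper):

```python
import torch
import torch.nn as nn

class TempLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])         # next-step temperature

def windows(series, w=12):                      # 12-month sliding windows
    xs = torch.stack([series[i:i + w] for i in range(len(series) - w)])
    return xs.unsqueeze(-1), series[w:].unsqueeze(-1)

# synthetic seasonal signal standing in for monthly mean temperatures
series = torch.sin(torch.linspace(0, 8 * torch.pi, 120))
x, y = windows(series)
model = TempLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```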
Abstract:
This paper proposes a Fast Graph Convolutional Recurrent Neural Network (FGRNN) architecture to predict sequences with an underlying graph structure. The proposed architecture addresses the limitations of the standard recurrent neural network (RNN), namely vanishing and exploding gradients, which cause numerical instabilities during training. State-of-the-art architectures that combine gated RNNs, such as the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), with graph convolutions are known to improve numerical stability during training, but at the expense of model size, involving a large number of training parameters. FGRNN addresses this problem by adding a weighted residual connection with only two extra training parameters compared to the standard RNN. Numerical experiments on a real 3D point cloud dataset corroborate the proposed architecture.
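A "weighted residual connection with only two extra training parameters" suggests a cell of the form h_t = α·σ(graph-conv(x_t) + U h_{t−1}) + β·h_{t−1}, with scalars α and β trainable. The sketch below is a hedged reading of that description; the specific graph convolution, the tanh nonlinearity, the normalized adjacency A_hat, and the initial values of α and β are assumptions.

```python
import torch
import torch.nn as nn

class FGRNNCell(nn.Module):
    def __init__(self, in_feats, hidden):
        super().__init__()
        self.W = nn.Linear(in_feats, hidden)            # input transform
        self.U = nn.Linear(hidden, hidden, bias=False)  # recurrent transform
        self.alpha = nn.Parameter(torch.tensor(0.1))    # extra parameter 1
        self.beta = nn.Parameter(torch.tensor(0.9))     # extra parameter 2

    def forward(self, A_hat, x_t, h_prev):
        # one-hop graph convolution: A_hat @ x_t mixes neighbor features
        z = torch.tanh(self.W(A_hat @ x_t) + self.U(h_prev))
        # weighted residual keeps gradients well-behaved over long sequences
        return self.alpha * z + self.beta * h_prev

# toy usage: 4-node graph, 3 time steps
N, F, H, T = 4, 2, 8, 3
A_hat = torch.eye(N)                # stand-in for a normalized adjacency
cell = FGRNNCell(F, H)
h = torch.zeros(N, H)
for t in range(T):
    h = cell(A_hat, torch.randn(N, F), h)
print(h.shape)                      # torch.Size([4, 8])
```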
Abstract:
This paper presents a new quantization model of neural nets, describing the change of the material configuration of a neural net by the Hamilton-Jacobi equations and the general activity of the net by the Schrödinger equation. Resting on the phase-space representation of the degrees of freedom of the net and on a potential-oriented formulation of the activity states of neural nets, a Hamiltonian operator can be defined that describes the classification and conditioning states of the net as a disturbed equilibrium state. As a result of this theory, it can be shown that supervised and unsupervised neural nets can be understood as the same kind of n-particle systems, differing only in their Lagrange functions.
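For reference, the two classical equations the abstract invokes are the Hamilton-Jacobi equation for the action S(q, t) and the time-dependent Schrödinger equation for a state ψ; their specific adaptation to neural net configurations is the paper's contribution and is not reproduced here:

```latex
% Hamilton-Jacobi equation for the action S(q, t)
% (change of material configuration):
\[
  \frac{\partial S}{\partial t}
  + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0
\]
% Time-dependent Schrodinger equation for the activity state \psi,
% with Hamiltonian operator \hat{H}:
\[
  i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\]
```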
Abstract:
The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for large amounts of fast memory to store both states and weights. Memory of the required size is currently only economically viable as DRAM. Although DRAM is a high-throughput, low-cost memory (costing 20X less than SRAM), its long random-access latency is poorly suited to the unpredictable access patterns of spiking neural networks (SNNs). In addition, accessing data from DRAM costs orders of magnitude more energy than performing arithmetic on that data. SNNs are energy-efficient only if local memory is available and few spikes are generated. This paper reports on our developments over the last 5 years of convolutional and recurrent deep neural network hardware accelerators that exploit either spatial or temporal sparsity, as SNNs do, but achieve state-of-the-art (SOA) throughput, power efficiency, and latency even when DRAM is used for the required storage of the weights and states of large DNNs.
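A minimal sketch of why sparsity saves both arithmetic and the expensive DRAM traffic the abstract emphasizes: for a sparse activation vector, only the weight columns paired with nonzero activations need to be fetched and multiplied. The matrix sizes and the roughly 90% sparsity level below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def dense_matvec(W, a):
    return W @ a                                # fetches every weight

def sparse_matvec(W, a):
    # only columns of W for nonzero activations are fetched and multiplied
    nz = np.flatnonzero(a)
    return W[:, nz] @ a[nz]

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 1024))
a = rng.normal(size=1024) * (rng.random(1024) < 0.1)   # ~90% zeros
assert np.allclose(dense_matvec(W, a), sparse_matvec(W, a))
print("weight fetches: dense", W.size,
      "vs sparse", W.shape[0] * np.count_nonzero(a))
```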